
Search: All records where Creators/Authors contains: "Bai, Yiwei"


  1. Abstract

    Effective solutions to conserve biodiversity require accurate community‐ and species‐level information at relevant, actionable scales and across entire species' distributions. However, data and methodological constraints have limited our ability to provide such information in robust ways. Herein we employ a Deep‐Reasoning Network implementation of the Deep Multivariate Probit Model (DMVP‐DRNets), an end‐to‐end deep neural network framework, to exploit large observational and environmental data sets together and estimate landscape‐scale species diversity and composition at continental extents. We present results from a novel year‐round analysis of North American avifauna using data from over nine million eBird checklists and 72 environmental covariates. We highlight the utility of our information by identifying critical areas of high species diversity for a single group of conservation concern, the North American wood warblers, while capturing spatiotemporal variation in species' environmental associations and interspecific interactions. In so doing, we demonstrate the type of accurate, high‐resolution information on biodiversity that deep learning approaches such as DMVP‐DRNets can provide and that is needed to inform ecological research and conservation decision‐making at multiple scales.
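    The multivariate probit link at the heart of DMVP-DRNets can be illustrated with a toy sketch: a species is predicted present when a correlated Gaussian latent variable, whose mean depends on environmental covariates, exceeds zero. Everything below is an illustrative assumption, not the paper's implementation — the real model replaces the single linear layer with a deep network trained end-to-end, and the correlation structure is learned rather than fixed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions: 5 environmental covariates, 4 species (illustrative only).
    n_covariates, n_species = 5, 4

    # Stand-in for the learned network: one linear map from covariates to
    # per-species latent means (DMVP-DRNets uses a deep net here).
    W = rng.normal(size=(n_covariates, n_species))

    # Species-by-species latent correlation matrix, capturing interspecific
    # associations; here a simple fixed exchangeable structure for illustration.
    rho = 0.3
    Sigma = rho * np.ones((n_species, n_species)) + (1.0 - rho) * np.eye(n_species)

    def predict_occurrence_probs(x, n_samples=20000):
        """Monte Carlo estimate of occurrence probabilities under the
        multivariate probit link: species j is 'present' in a sample when
        its correlated Gaussian latent z_j exceeds zero."""
        mu = x @ W
        z = rng.multivariate_normal(mu, Sigma, size=n_samples)
        presence = z > 0.0
        return presence.mean(axis=0)  # marginal occurrence probability per species

    x = rng.normal(size=n_covariates)      # one site's (toy) covariate vector
    probs = predict_occurrence_probs(x)
    ```

    Because the latents are jointly Gaussian, the same samples also give joint occurrence probabilities for any subset of species, which is what allows community-level (rather than species-by-species) prediction.
    
    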

  2. Abstract

    Contextual bandit algorithms have become widely used for recommendation in online systems (e.g. marketplaces, music streaming, news), where they now wield substantial influence on which items get shown to users. This raises questions of fairness to the items — and to the sellers, artists, and writers that benefit from this exposure. We argue that the conventional bandit formulation can lead to an undesirable and unfair winner-takes-all allocation of exposure. To remedy this problem, we propose a new bandit objective that guarantees merit-based fairness of exposure to the items while optimizing utility to the users. We formulate fairness regret and reward regret in this setting and present algorithms for both stochastic multi-armed bandits and stochastic linear bandits. We prove that the algorithms achieve sublinear fairness regret and reward regret. Beyond the theoretical analysis, we also provide empirical evidence that these algorithms can allocate exposure to different arms effectively.
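    The merit-based fairness-of-exposure idea can be sketched for a toy stochastic multi-armed bandit: instead of concentrating all pulls on the best arm (winner-takes-all), the learner targets an exposure share for each arm proportional to its estimated merit. The deficit-based selection rule and Laplace-smoothed merit estimates below are illustrative assumptions, not the paper's algorithms, and this sketch carries none of its regret guarantees.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy Bernoulli bandit: each arm's true merit is its mean reward (assumed).
    true_merits = np.array([0.8, 0.5, 0.2])
    n_arms, horizon = len(true_merits), 5000

    pulls = np.zeros(n_arms)
    reward_sums = np.zeros(n_arms)

    for t in range(horizon):
        if t < n_arms:
            arm = t  # pull each arm once to initialize merit estimates
        else:
            # Laplace-smoothed merit estimates keep every arm's merit positive.
            merit_hat = (reward_sums + 1.0) / (pulls + 2.0)
            # Target cumulative exposure proportional to estimated merit;
            # play the arm whose realized pulls lag its target the most.
            target_pulls = t * merit_hat / merit_hat.sum()
            arm = int(np.argmax(target_pulls - pulls))
        reward = float(rng.random() < true_merits[arm])
        pulls[arm] += 1
        reward_sums[arm] += reward

    exposure_share = pulls / pulls.sum()
    ```

    After enough rounds, each arm's exposure share roughly tracks its share of total merit, so lower-merit items still receive meaningful exposure rather than none — the contrast with a regret-minimizing policy, which would send nearly all pulls to the best arm.
    
    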